
    Integration of Simulink Models into the Cosmos Model Checker

    We present an implementation of Simulink model execution in the statistical model checker Cosmos. We take advantage of this implementation for hybrid modeling combining Petri nets and Simulink models.

    Bounds Computation for Symmetric Nets

    Monotonicity in Markov chains is the starting point for quantitative abstraction of complex probabilistic systems, leading to upper or lower bounds for the probabilities and mean values relevant to their analysis. While numerous case studies exist in the literature, there is no generic model for which monotonicity is derived directly from its structure. Here we propose such a model and formalize it as a subclass of Stochastic Symmetric (Petri) Nets (SSNs) called Stochastic Monotonic SNs (SMSNs). On this subclass, monotonicity is proven by coupling arguments that can be applied to an abstract description of the state (the symbolic marking). Our class includes both process synchronization and resource sharing, and can be extended to model open or cyclic closed systems. We present automatic methods for transforming a non-monotonic system into a monotonic one matching the SMSN pattern, and for transforming a monotonic system with a large state space into one with a reduced state space. We illustrate the interest of the proposed method by expressing standard monotonic models and by modelling a flexible manufacturing system case study.
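    The coupling argument mentioned above can be illustrated with a toy sketch (the birth-death chain and all names here are illustrative assumptions, not the paper's SMSN construction): two copies of a monotone chain are driven by the same random draws, and the order between their states is preserved along the whole trajectory.

```python
import random

def step(n, u, p=0.4, cap=10):
    """One step of a birth-death chain on {0..cap}: birth w.p. p,
    death otherwise, driven by a shared uniform draw u (the coupling)."""
    if u < p:
        return min(n + 1, cap)
    return max(n - 1, 0)

def coupled_run(x0, y0, steps=10_000, seed=5):
    """Run two copies from ordered starts with the SAME randomness and
    check that the order x <= y is preserved at every step."""
    rng = random.Random(seed)
    x, y = x0, y0
    for _ in range(steps):
        u = rng.random()
        x, y = step(x, u), step(y, u)
        if x > y:
            return False
    return True

print(coupled_run(2, 7))  # the coupling preserves the order: True
```

    Because both copies move up or down together under the shared draw, the order can never be violated; this is exactly the kind of pathwise argument a coupling proof of monotonicity formalizes.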

    Accelerations for Statistical Model Checking

    In recent decades, the analysis of complex critical systems subject to uncertainty has become more and more important. In particular, the quantitative analysis of these systems is necessary to guarantee that their probability of failure is very small. As their state space is extremely large and the probability of interest is very small, typically less than one in a billion, classical methods do not apply to such systems. Model checking algorithms are used for the analysis of probabilistic systems: they take as input the system and its expected behaviour, and compute the probability with which the system behaves as expected. These algorithms have been broadly studied. They can be divided into two main families: numerical model checking and statistical model checking. The former computes small probabilities accurately by solving linear equation systems, but does not scale to very large systems due to the state space explosion problem. The latter is based on Monte Carlo simulation and scales well to large systems, but cannot deal with small probabilities. The main contribution of this thesis is the design and implementation of a method combining the two approaches that returns a confidence interval for the probability of interest. The method applies to systems in both continuous- and discrete-time settings, for time-bounded and time-unbounded properties. All its variants rely on an abstraction of the model: the abstraction is analysed by a numerical model checker and the result is used to steer Monte Carlo simulations on the initial model. The abstraction should be small enough to be analysed by numerical methods and precise enough to improve the simulation. It can be built by the modeller; alternatively, a class of systems can be identified in which an abstraction can be computed automatically. This approach has been implemented in the tool Cosmos, and the method was successfully applied to classical benchmarks and a case study.
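    As a minimal illustration of the statistical side (a generic Monte Carlo sketch with an assumed toy model, not the thesis's combined numerical/statistical method), one can estimate a probability by simulation and report a Chernoff-Hoeffding confidence interval:

```python
import math
import random

def simulate_once(rng, p_step=0.001, steps=20):
    # Hypothetical model: the system fails within `steps` trials,
    # each trial failing independently with probability p_step.
    return any(rng.random() < p_step for _ in range(steps))

def monte_carlo_ci(n_runs=100_000, delta=0.05, seed=42):
    """Estimate the failure probability and a (1 - delta)
    Chernoff-Hoeffding confidence interval of half-width
    eps = sqrt(ln(2/delta) / (2 * n_runs))."""
    rng = random.Random(seed)
    hits = sum(simulate_once(rng) for _ in range(n_runs))
    p_hat = hits / n_runs
    eps = math.sqrt(math.log(2 / delta) / (2 * n_runs))
    return p_hat, (max(0.0, p_hat - eps), min(1.0, p_hat + eps))

p_hat, (low, high) = monte_carlo_ci()
print(f"estimate {p_hat:.4f}, 95% CI [{low:.4f}, {high:.4f}]")
```

    The interval half-width shrinks only as 1/sqrt(n_runs), which is why plain Monte Carlo becomes hopeless for probabilities around one in a billion and motivates the abstraction-guided approach of the thesis.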

    Max-Entropy Sampling for Deterministic Timed Automata Under Linear Duration Constraints

    Adding probabilities to timed automata enables one to carry out random simulation of their behaviours and to provide answers with statistical guarantees to problems that are otherwise intractable. Thus, when just a timed language is given, a natural question arises: what probability should we consider if we have no a priori knowledge except the given language and the considered length (i.e. number of events) of timed words? The maximal entropy principle tells us to take the probability measure that maximizes the entropy, which is the uniform measure on the language restricted to timed words of the given length (with such a uniform measure, every timed word has the same chance of being sampled). The uniform sampling methods developed in the last decade provide no control on the duration of sampled timed words. In the present article we consider the problem of finding a probability measure on a timed language that maximizes the entropy under general linear constraints on duration, for timed words of a given length. The solution we provide generalizes to timed languages a well-known result on probability measures over the real line maximizing the Shannon continuous entropy under linear constraints. After giving our general theorem for general linear constraints and general timed languages, we concentrate on the case where only the mean duration is prescribed (again with the length fixed) for timed languages recognized by deterministic timed automata. For this latter case, we provide an efficient sampling algorithm, which we have implemented and illustrated on several examples.
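    The classical real-line result that the paper generalizes can be illustrated directly: among distributions on [0, ∞) with a prescribed mean, the exponential distribution maximizes the Shannon continuous entropy. A toy sketch (illustrative only, not the paper's algorithm on timed languages):

```python
import math
import random

def exp_entropy(mean):
    # Differential entropy of an Exponential with the given mean: 1 + ln(mean)
    return 1.0 + math.log(mean)

def uniform_entropy(mean):
    # Uniform on [0, 2*mean] has the same mean; its entropy is ln(2*mean)
    return math.log(2 * mean)

mean = 5.0
rng = random.Random(0)
samples = [rng.expovariate(1 / mean) for _ in range(200_000)]
emp_mean = sum(samples) / len(samples)
print(f"empirical mean {emp_mean:.2f} (target {mean})")
print(f"entropy: exponential {exp_entropy(mean):.3f} > uniform {uniform_entropy(mean):.3f}")
```

    The gap (1 versus ln 2 plus the common ln(mean) term) holds for every mean, matching the general fact that exponential tilting solves the max-entropy problem under a mean constraint.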

    Cosmos: Evolution of a Statistical Model Checking Platform

    Cosmos is a statistical model checker for the Hybrid Automata Stochastic Logic (HASL). HASL uses Linear Hybrid Automata (LHA), a generalization of Deterministic Timed Automata (DTA), to describe accepting execution paths of a Discrete Event Stochastic Process (DESP), a class of stochastic models which includes, but is not limited to, Markov chains. As a result, HASL verification turns out to be a unifying framework where sophisticated temporal reasoning is naturally blended with elaborate reward-based analysis. Cosmos takes as input a DESP (described in terms of a Generalized Stochastic Petri Net (GSPN)), an LHA and an expression Z representing the quantity to be estimated. It returns a confidence-interval estimation of Z. Cosmos is written in C++ and is freely available to the research community. It is jointly developed by researchers of the Institut National de Recherche en Informatique et en Automatique (INRIA) and of the Laboratoire Algorithmique, Complexité et Logique (LACL) of the Université Paris-Est Créteil. Since its introduction [8], the tool has evolved with the addition of a number of new features, including support for rare-event systems and for hybrid systems.
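    As a hypothetical miniature of a DESP with a reward (not Cosmos's actual input format or estimator): an M/M/1 queue simulated by competing exponential events, estimating the time-averaged queue length, the kind of reward-based quantity Z that HASL expresses.

```python
import random

def mm1_mean_queue(lmbda=0.5, mu=1.0, t_max=10_000.0, seed=1):
    """Simulate an M/M/1 queue (a tiny DESP) and estimate the
    time-averaged queue length, a simple reward over one long run."""
    rng = random.Random(seed)
    t, n, area = 0.0, 0, 0.0
    while t < t_max:
        rate = lmbda + (mu if n > 0 else 0.0)   # total event rate
        dt = min(rng.expovariate(rate), t_max - t)
        area += n * dt                          # accumulate reward n over dt
        t += dt
        if t >= t_max:
            break
        if rng.random() < lmbda / rate:         # arrival wins the race
            n += 1
        else:                                   # departure wins
            n -= 1
    return area / t_max

est = mm1_mean_queue()
print(f"estimated mean queue length: {est:.2f}")  # theory: rho/(1-rho) = 1.0
```

    For rho = lmbda/mu = 0.5 the stationary mean queue length is rho/(1-rho) = 1, so the estimate should hover near 1.0; a real Cosmos run would additionally report a confidence interval for Z.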

    Generation of Signals Under Temporal Constraints for CPS Testing

    This work is concerned with the validation of cyber-physical systems (CPS) via sampling of input signal spaces. Such a space is infinite and in general too difficult to treat symbolically, meaning that the only reasonable option is to sample a finite number of input signals and simulate the corresponding system behaviours. It is important to choose a sample that best "covers" the whole input signal space. We use timed automata to model temporal constraints, in order to avoid spurious bugs coming from unrealistic inputs; this can also reduce the input space to explore. We propose a method for low-discrepancy generation of signals under temporal constraints recognized by timed automata. The discrepancy notion reflects how uniformly the input signal space is sampled, and additionally allows deriving validation and performance guarantees. To evaluate testing quality, we also define a measure of uniformity for an arbitrary set of input signals. We describe a prototype tool chain and demonstrate the proposed methods on a Kinetic Battery Model (KiBaM) and a Σ∆ modulator.
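    A one-dimensional sketch of the low-discrepancy idea (illustrative; the paper works on signal spaces constrained by timed automata, not on [0, 1)): the van der Corput sequence fills the unit interval far more evenly than random sampling, as measured by the star discrepancy.

```python
import random

def van_der_corput(n, base=2):
    """n-th term of the base-`base` van der Corput low-discrepancy
    sequence: the digits of n are mirrored around the radix point."""
    q, denom = 0.0, 1.0
    while n:
        n, r = divmod(n, base)
        denom *= base
        q += r / denom
    return q

def star_discrepancy_1d(points):
    """Exact star discrepancy of a 1-D point set: the largest gap
    between the empirical CDF and the uniform CDF on [0, 1)."""
    pts = sorted(points)
    n = len(pts)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(pts))

rng = random.Random(7)
n = 256
low_disc = [van_der_corput(i + 1) for i in range(n)]
rand_pts = [rng.random() for _ in range(n)]
print(f"van der Corput: {star_discrepancy_1d(low_disc):.4f}, "
      f"random: {star_discrepancy_1d(rand_pts):.4f}")
```

    Low discrepancy is exactly what makes coverage guarantees possible: a bound on the discrepancy of the sample translates into a bound on how much of the input space can be missed.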

    Wordgen: A Timed Word Generation Tool

    Sampling timed words from a timed language described as a timed automaton may seem a simple task: start from the initial state, choose a transition and a delay, and repeat until an accepting state is reached. Unfortunately, such a simple approach, based on local, on-the-fly rules, produces timed words from distributions that are biased in unpredictable ways. For this reason, approaches have been developed to guarantee that the sampling follows a more desirable distribution defined over the timed language rather than over the automaton. One such distribution is the maximal entropy distribution, whose implementation requires several non-trivial computational steps. In this paper, we present Wordgen, which combines these necessary steps into a lightweight standalone tool. The resulting timed words can be mapped to signals used for model-based testing and falsification of cyber-physical systems thanks to a simple interface with the Breach tool.

    Performance modelling of access control mechanisms for local and vehicular wireless networks

    Carrier sense multiple access with collision avoidance (CSMA/CA) is the basic scheme by which access to the shared medium is regulated in many wireless networks. With CSMA/CA, a station willing to start a transmission must first find the channel free for a given duration, otherwise it goes into backoff, i.e. it refrains from transmitting for a randomly chosen delay. Performance analysis of a wireless network employing CSMA/CA regulation is not an easy task: except for simple network configurations, analytical solutions for key performance indicators (KPIs) cannot be obtained, so one has to resort to formal modelling tools. In this paper we present a performance modelling study targeting different kinds of CSMA/CA-based wireless networks, namely the IEEE 802.11 Wireless Local Area Networks (WLANs) and the 802.11p Vehicular Ad Hoc Networks (VANETs), which extends 802.11 with priorities over packets. The modelling framework we introduce allows for considering: i) an arbitrarily large number of stations, ii) different traffic conditions (saturated/non-saturated), and iii) different hypotheses concerning the shared channel (ideal/non-ideal). We apply statistical model checking to assess the KPIs of different network configurations.
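    A toy slotted sketch of the CSMA/CA backoff mechanism (illustrative assumptions throughout; the paper's models are far more detailed): the contention window doubles after each failed attempt, and a collision occurs when two stations draw the same smallest backoff counter.

```python
import random

CW_MIN, CW_MAX = 16, 1024

def backoff(retries, rng):
    """Binary exponential backoff: the contention window doubles with
    each failed attempt, capped at CW_MAX; the station then defers a
    uniformly chosen number of idle slots."""
    cw = min(CW_MIN * (2 ** retries), CW_MAX)
    return rng.randrange(cw)

def collision_rate(n_stations=5, n_frames=20_000, seed=0):
    """Toy saturated slotted model: every station draws a fresh backoff;
    the smallest counter transmits, ties count as collisions
    (no capture effect, ideal channel)."""
    rng = random.Random(seed)
    collisions = 0
    for _ in range(n_frames):
        counters = [backoff(0, rng) for _ in range(n_stations)]
        if counters.count(min(counters)) > 1:
            collisions += 1
    return collisions / n_frames

print(f"collision rate with 5 stations: {collision_rate():.3f}")
```

    Even this crude model shows why analytical KPIs are hard to obtain beyond simple configurations: the collision probability depends on the joint distribution of all stations' counters, which the paper handles with statistical model checking instead.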

    Coupling and Importance Sampling for Statistical Model Checking

    Statistical model checking is an alternative verification technique applied to stochastic systems whose size is beyond the reach of numerical analysis. Given a model (most often a Markov chain) and a formula, it provides a confidence interval for the probability that the model satisfies the formula. One of the main limitations of the statistical approach is the explosion of computation time triggered by the evaluation of very small probabilities. To address this problem we develop a new approach based on importance sampling and coupling. The corresponding algorithms have been implemented in our tool Cosmos. We present experiments on several relevant systems, with estimated time reductions reaching a factor of 10^120.
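    A generic importance-sampling sketch for a rare event (a textbook binomial tilting example, not the paper's coupling-based change of measure): sample from a biased proposal under which the event is common, and reweight by the likelihood ratio so the estimator stays unbiased.

```python
import math
import random

N, P, Q = 50, 0.1, 0.5   # trials, true success prob., tilted proposal prob.
THRESH = 25              # rare event: at least THRESH successes

def is_estimate(n_runs=100_000, seed=11):
    """Importance sampling: draw X ~ Binomial(N, Q) instead of
    Binomial(N, P) and weight each hit of {X >= THRESH} by the
    likelihood ratio (P/Q)^X * ((1-P)/(1-Q))^(N-X)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        x = sum(rng.random() < Q for _ in range(N))
        if x >= THRESH:
            total += (P / Q) ** x * ((1 - P) / (1 - Q)) ** (N - x)
    return total / n_runs

est = is_estimate()
exact = sum(math.comb(N, k) * P**k * (1 - P)**(N - k)
            for k in range(THRESH, N + 1))
print(f"importance-sampling estimate {est:.2e}, exact {exact:.2e}")
```

    The target probability here is around 10^-12, far below what plain Monte Carlo could ever hit, yet the tilted estimator recovers it from a modest number of runs; choosing a good change of measure with provable variance reduction is exactly where the paper's coupling construction comes in.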